New Training Algorithms for Dependently Initialized Multilayer Perceptrons

Authors

  • Walter H. Delashmit
  • Michael T. Manry
Abstract

Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically nonincreasing function of the number of hidden units. New training algorithms are developed where weights and thresholds from a well-trained smaller network are used to initialize a larger network. Methods are also developed to reduce the total amount of training required. It is shown that this technique yields an error curve that is a monotonic nonincreasing function of the number of hidden units and significantly reduces the training complexity. Additional results are presented based on using different probability distributions to generate the initial weights.
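The core idea above can be sketched in code: a larger network inherits the weights and thresholds of a well-trained smaller network, and only the newly added hidden units receive fresh random values. This is a minimal illustrative sketch, not the authors' exact procedure; the layer layout, helper names (`init_mlp`, `grow_mlp`), and the normal initial-weight distribution are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hid, n_out):
    """Randomly initialize a one-hidden-layer MLP (weights and thresholds).

    The N(0, 0.1) distribution here is an assumption; the paper also
    studies other distributions for the initial weights.
    """
    return {
        "W1": rng.normal(0, 0.1, (n_hid, n_in)),   # input-to-hidden weights
        "b1": rng.normal(0, 0.1, n_hid),           # hidden-unit thresholds
        "W2": rng.normal(0, 0.1, (n_out, n_hid)),  # hidden-to-output weights
        "b2": rng.normal(0, 0.1, n_out),           # output thresholds
    }

def grow_mlp(small, n_new):
    """Dependently initialize a larger MLP from a trained smaller one.

    All weights and thresholds of the smaller network are copied into the
    corresponding positions of the larger network; only the n_new added
    hidden units (and their output connections) are initialized randomly.
    """
    n_hid, n_in = small["W1"].shape
    n_out = small["W2"].shape[0]
    big = init_mlp(n_in, n_hid + n_new, n_out)
    big["W1"][:n_hid] = small["W1"]        # reuse trained input weights
    big["b1"][:n_hid] = small["b1"]        # reuse trained thresholds
    big["W2"][:, :n_hid] = small["W2"]     # reuse trained output weights
    big["b2"][:] = small["b2"]
    return big
```

Because the grown network starts at (or near) the smaller network's training error, retraining it cannot end worse than the smaller network did, which is what makes the error-versus-hidden-units curve nonincreasing.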


Similar articles

Performance analysis of a MLP weight initialization algorithm

The determination of the initial weights is an important issue in multilayer perceptron design. Recently, we have proposed a new approach to weight initialization based on discriminant analysis techniques. In this paper, the performance of multilayer perceptrons (MLPs) initialized by non-parametric discriminant analysis is compared to that of randomly initialized MLPs using several synthetic...


Training neural networks by means of genetic algorithms working on very long chromosomes

In the neural network/genetic algorithm community, rather limited success in training neural networks by genetic algorithms has been reported. Whitley et al. (1991) claim that, due to "the multiple representations problem", genetic algorithms will not be able to effectively train multilayer perceptrons whose chromosomal representation of the weights exceeds 300 bits. ...


Training Multilayer Perceptrons Via Minimization of Sum of Ridge Functions

Motivated by the problem of training multilayer perceptrons in neural networks, we consider the problem of minimizing E(x) = ∑_{i=1}^{n} f_i(ξ_i · x), where ξ_i ∈ ℝ^s, 1 ≤ i ≤ n, and each f_i(ξ_i · x) is a ridge function. We show that when n is small the problem of minimizing E can be treated as one of minimizing univariate functions, and we use gradient algorithms for minimizing E when n is moderately la...
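As a rough illustration of the minimization problem in this abstract (not the paper's own algorithm), plain gradient descent on a sum of ridge functions only needs the chain rule: ∇E(x) = ∑_i f_i′(ξ_i · x) ξ_i. The step size, iteration count, and the quadratic choice of f_i below are assumptions for the example.

```python
import numpy as np

def minimize_ridge_sum(xi, f_prime, x0, lr=0.01, steps=3000):
    """Gradient descent on E(x) = sum_i f_i(xi_i . x).

    xi      : (n, s) matrix whose rows are the ridge directions xi_i
    f_prime : maps the projections t_i = xi_i . x to the derivatives f_i'(t_i)
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        t = xi @ x               # projections xi_i . x, shape (n,)
        grad = xi.T @ f_prime(t)  # sum_i f_i'(t_i) * xi_i, shape (s,)
        x -= lr * grad
    return x

# Example: choosing f_i(t) = (t - y_i)^2 makes E a least-squares objective.
rng = np.random.default_rng(1)
xi = rng.normal(size=(10, 3))
y = xi @ np.array([1.0, -2.0, 0.5])   # targets from a known x*
x_hat = minimize_ridge_sum(xi, lambda t: 2 * (t - y), np.zeros(3))
```

With quadratic f_i the problem is convex and the iterates recover the generating vector; the paper's interest is in the harder general case, where each one-dimensional f_i may be nonconvex.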


An Active Learning Algorithm Based on Existing Training Data

A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about the problem being learnt, sufficiently accurate classification is attained with fewer training data. Recently, such active learning has been receiving increasing interest. In this paper, we ...


Improve an Efficiency of Feedforward Multilayer Perceptrons by Serial Training

The feedforward multilayer perceptron is a widely used artificial neural network model trained with the backpropagation algorithm on real-world data. There are two common ways to construct a feedforward multilayer perceptron network: either start with a large network and prune away the irrelevant nodes, or start from a small network and add new relevant nodes. An Arti...



Publication year: 2004